14 research outputs found

    Understanding Group Structures and Properties in Social Media

    Full text link
    Abstract. The rapid growth of social networking sites enables people to connect to each other more conveniently than ever. With easy-to-use social media, people contribute and consume content, leading to a new form of human interaction and the emergence of online collective behavior. In this chapter, we aim to understand group structures and properties by extracting and profiling communities in social media. We present some challenges of community detection in social media. A prominent one is that networks in social media are often heterogeneous. We introduce two types of heterogeneity present in online social networks and elaborate corresponding community detection approaches for each type. Social media provides not only interaction information but also textual and tag data. This variety of data can be exploited to profile individual groups and to understand group formation and relationships. We also suggest some future work on understanding group structures and properties.
    Key words: social media, community detection, group profiling, heterogeneous networks, multi-mode networks, multi-dimensional networks
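    One flavor of heterogeneity mentioned above, multi-dimensional networks (the same users connected through several interaction types), can be illustrated with a minimal sketch. This is not the chapter's own algorithm: it assumes made-up edge lists, aggregates the dimensions into one weighted graph, and applies off-the-shelf modularity maximization from networkx. Multi-mode networks (several node types) would need a different treatment.

```python
# Minimal sketch: community detection on a multi-dimensional network by
# aggregating per-dimension edges and maximizing modularity.
# The edge lists are placeholder examples, not real data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each "dimension" is one interaction type (e.g. friendship, comments, tags).
dimensions = {
    "friendship": [("a", "b"), ("b", "c"), ("d", "e")],
    "comments":   [("a", "c"), ("d", "e"), ("e", "f")],
    "tags":       [("a", "b"), ("e", "f")],
}

# Aggregate into a single weighted graph: an edge's weight is the number of
# dimensions in which that interaction occurs.
G = nx.Graph()
for edges in dimensions.values():
    for u, v in edges:
        w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=w)

# Greedy modularity maximization on the aggregated graph.
communities = greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```

    Simple layer aggregation is only one integration strategy; approaches that keep the dimensions separate (e.g. joint factorization of per-dimension structures) are closer to what the chapter discusses.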

    Guest Editorial: Learning from multiple sources

    No full text

    Orhon Yazıtları [The Orkhon Inscriptions] (Kƶl Tegin, Bilge Kağan, Tonyukuk, Ongi, KĆ¼li Ƈor).

    No full text
    The automatic annotation of images presents a particularly complex problem for machine learning researchers. In this work we experiment with semantic models and multi-class learning for the automatic annotation of query images. We represent the images using scale-invariant transformation descriptors in order to account for similar objects appearing at slightly different scales and transformations. The resulting descriptors are utilised as visual terms for each image. We first aim to annotate query images by retrieving images that are similar to the query image. This approach rests on the assumption that similar images are likely to be annotated similarly. We then propose an image annotation method that learns a direct mapping from image descriptors to keywords. We compare the semantic-based methods of Latent Semantic Indexing and Kernel Canonical Correlation Analysis (KCCA), as well as a recently proposed vector-label learning method known as Maximum Margin Robot.
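    The retrieval-based annotation step described above can be sketched briefly. The sketch below is an illustration under assumed inputs, not the paper's implementation: it takes precomputed visual-term histograms (e.g. counts of quantized local descriptors), finds the training images most similar to a query by cosine similarity, and propagates their most frequent keywords. The histograms and keyword lists are placeholders.

```python
# Minimal sketch of annotation-by-retrieval (placeholder data, not the paper's code).
from collections import Counter
import numpy as np

train_hists = np.array([      # one visual-term histogram per training image
    [5, 0, 2, 1],
    [4, 1, 3, 0],
    [0, 6, 0, 5],
], dtype=float)
train_keywords = [            # keywords attached to each training image
    ["sky", "sea"],
    ["sky", "boat"],
    ["grass", "cow"],
]

def annotate(query_hist, k=2, n_keywords=2):
    """Propagate keywords from the k training images most similar to the query."""
    q = query_hist / (np.linalg.norm(query_hist) + 1e-12)
    t = train_hists / (np.linalg.norm(train_hists, axis=1, keepdims=True) + 1e-12)
    sims = t @ q                          # cosine similarity to every training image
    nearest = np.argsort(sims)[::-1][:k]  # indices of the k most similar images
    votes = Counter(kw for i in nearest for kw in train_keywords[i])
    return [kw for kw, _ in votes.most_common(n_keywords)]

print(annotate(np.array([4.0, 0.0, 2.0, 1.0])))   # e.g. ['sky', ...]
```

    The paper's second route, a learned mapping from descriptors to keywords (via LSI, KCCA, or Maximum Margin Robot), would replace this nearest-neighbour vote with a projection into a shared semantic space.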

    Multi-modal Correlation Modeling and Ranking for Retrieval

    No full text